1,495 research outputs found

    Advanced Mid-Water Tools for 4D Marine Data Fusion and Visualization

    Mapping and charting of the seafloor underwent a revolution approximately 20 years ago with the introduction of multibeam sonars -- sonars that provided complete, high-resolution coverage of the seafloor rather than sparse measurements. The initial focus of these sonar systems was the charting of depths in support of safety of navigation and offshore exploration; more recently, innovations in processing software have led to approaches for characterizing seafloor type and for mapping seafloor habitat in support of fisheries research. In recent years, a new generation of multibeam sonars has been developed that, for the first time, can map the water column along with the seafloor. This ability will potentially allow multibeam sonars to address a number of critical ocean problems, including the direct mapping of fish and marine mammals, the location of mid-water targets and, if water column properties are appropriate, a wide range of physical oceanographic processes. This potential relies on suitable software to make use of all of the newly available data. Currently, the users of these sonars have a limited view of the mid-water data in real time and limited capacity to store it, replay it, or run further analysis. The mid-water data also needs to be integrated with other sensor data such as bathymetry, backscatter, sub-bottom profiles and seafloor characterizations so that a “complete” picture of the marine environment under analysis can be realized. Software tools developed for this type of data integration should support a wide range of sonars, with a unified format for the wide variety of mid-water sonar types. This paper describes the evolution and result of an effort to create a software tool that meets these needs, and details case studies using the new tools in the areas of fisheries research, static target search, wreck surveys and physical oceanographic processes.

    RAB: Provable Robustness Against Backdoor Attacks

    Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks, including evasion and backdoor (poisoning) attacks. On the defense side, there have been intensive efforts to improve both empirical and provable robustness against evasion attacks; however, provable robustness against backdoor attacks remains largely unexplored. In this paper, we focus on certifying machine learning model robustness against general threat models, especially backdoor attacks. We first provide a unified framework via randomized smoothing techniques and show how it can be instantiated to certify robustness against both evasion and backdoor attacks. We then propose the first robust training process, RAB, to smooth the trained model and certify its robustness against backdoor attacks. We derive the robustness bound for machine learning models trained with RAB and prove that our robustness bound is tight. In addition, we show that it is possible to train the robust smoothed models efficiently for simple models such as K-nearest neighbor (K-NN) classifiers, and we propose an exact smooth-training algorithm that eliminates the need to sample from a noise distribution for such models. Empirically, we conduct comprehensive experiments for different machine learning (ML) models such as DNNs, differentially private DNNs, and K-NN models on the MNIST, CIFAR-10 and ImageNet datasets, and provide the first benchmark for certified robustness against backdoor attacks. In addition, we evaluate K-NN models on a spambase tabular dataset to demonstrate the advantages of the proposed exact algorithm. Both the theoretical analysis and the comprehensive evaluation on diverse ML models and datasets shed light on further robust learning strategies against general training-time attacks. (Comment: 31 pages, 5 figures, 7 tables)
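
    As an illustration of the randomized-smoothing machinery this line of work builds on, here is a minimal sketch of a Gaussian smoothing certificate for a single test input. The backdoor setting described in the abstract additionally smooths over perturbations of the training set, which is omitted here; `base_classifier`, the noise scale, sample count and confidence level are illustrative placeholders, not the paper's implementation.

```python
# Minimal sketch of the Gaussian randomized-smoothing certificate that this
# line of work builds on (Cohen-et-al.-style bound for a single test input).
# RAB additionally smooths over perturbations of the *training set*, which is
# omitted here; `base_classifier`, sigma, n and alpha are placeholders.
import numpy as np
from scipy.stats import beta, norm

def smoothed_certify(base_classifier, x, sigma=0.5, n=1000, alpha=0.001, num_classes=10):
    """Majority vote under Gaussian input noise plus a certified L2 radius."""
    rng = np.random.default_rng(0)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    top = int(np.argmax(counts))
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = beta.ppf(alpha, counts[top], n - counts[top] + 1)
    if p_lower <= 0.5:
        return None, 0.0                       # abstain: no useful certificate
    return top, sigma * norm.ppf(p_lower)      # certified L2 radius
```

    Roughly speaking, a backdoor certificate in the RAB style replaces this per-input noise with randomness injected into the training data (and an ensemble of models trained on it), but the vote-then-bound structure stays the same.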

    Certifying Out-of-Domain Generalization for Blackbox Functions

    Certifying the robustness of model performance under bounded data distribution drifts has recently attracted intensive interest under the umbrella of distributional robustness. However, existing techniques either make strong assumptions on the model class and loss functions that can be certified, such as smoothness expressed via Lipschitz continuity of gradients, or require solving complex optimization problems. As a result, the wider application of these techniques is currently limited by their scalability and flexibility -- they often do not scale to large-scale datasets with modern deep neural networks, or cannot handle non-smooth loss functions such as the 0-1 loss. In this paper, we focus on the problem of certifying distributional robustness for blackbox models and bounded loss functions, and propose a novel certification framework based on the Hellinger distance. Our certification technique scales to ImageNet-scale datasets, complex models, and a diverse set of loss functions. We then focus on one specific application enabled by such scalability and flexibility, i.e., certifying out-of-domain generalization for large neural networks and loss functions such as accuracy and AUC. We experimentally validate our certification method on a number of datasets, ranging from ImageNet, where we provide the first non-vacuous certified out-of-domain generalization, to smaller classification tasks, where we are able to compare with the state of the art and show that our method performs considerably better. (Comment: 39th International Conference on Machine Learning (ICML) 2022)
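
    To see why a Hellinger budget controls performance under distribution shift, one standard (and deliberately loose) route for a non-negative loss f uses only Cauchy-Schwarz; this is meant to expose the mechanism and is not the paper's tighter certificate.

    \[
    \bigl|\mathbb{E}_Q[f]-\mathbb{E}_P[f]\bigr|
    = \Bigl|\int f\,(\sqrt{q}-\sqrt{p})(\sqrt{q}+\sqrt{p})\,d\mu\Bigr|
    \le \sqrt{\int f^{2}(\sqrt{q}+\sqrt{p})^{2}\,d\mu}\;\sqrt{\int(\sqrt{q}-\sqrt{p})^{2}\,d\mu}
    \le 2\,H(P,Q)\,\sqrt{\mathbb{E}_P[f^{2}]+\mathbb{E}_Q[f^{2}]},
    \]

    using $H^{2}(P,Q)=\tfrac{1}{2}\int(\sqrt{p}-\sqrt{q})^{2}\,d\mu$ and $(\sqrt{p}+\sqrt{q})^{2}\le 2(p+q)$. For a loss bounded in $[0,1]$, $\mathbb{E}_Q[f^{2}]\le 1$, so a budget $H(P,Q)\le\rho$ already certifies $\mathbb{E}_Q[f]\le\mathbb{E}_P[f]+2\rho\sqrt{\mathbb{E}_P[f^{2}]+1}$ for a blackbox $f$, using only quantities measurable on the source distribution.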

    European institutions?

    © 2016 The British Society for Phenomenology. The aim of this article is to sketch a phenomenological theory of political institutions and to apply it to some objections and questions raised by Pierre Manent about the project of the European Union, and more specifically the question of “European Construction”, i.e. what the aim of the European Project is. Such a theory of political institutions is nested within a broader phenomenological account of institutions, dimensions of which I have tried to elaborate elsewhere. As a working conceptual delineation, we can describe institutions as (relatively) stable meaning structures. As such, the definition encompasses phenomena like the European Commission, Belgium, marriage, the Dollar, the Labour Party, but also political subjects themselves. In order to develop said theory of institutions, I will draw primarily upon resources in the work of Maurice Merleau-Ponty and John Searle.

    TSS: Transformation-Specific Smoothing for Robustness Certification

    As machine learning (ML) systems become pervasive, safeguarding their security is critical. However, it has recently been demonstrated that motivated adversaries are able to mislead ML systems by perturbing test data using semantic transformations. While there exists a rich body of research providing provable robustness guarantees for ML models against ℓ_p norm bounded adversarial perturbations, guarantees against semantic perturbations remain largely underexplored. In this paper, we provide TSS -- a unified framework for certifying ML robustness against general adversarial semantic transformations. First, depending on the properties of each transformation, we divide common transformations into two categories, namely resolvable (e.g., Gaussian blur) and differentially resolvable (e.g., rotation) transformations. For the former, we propose transformation-specific randomized smoothing strategies and obtain strong robustness certification. The latter category covers transformations that involve interpolation errors, and we propose a novel approach based on stratified sampling to certify the robustness. Our framework TSS leverages these certification strategies and combines them with consistency-enhanced training to provide rigorous certification of robustness. We conduct extensive experiments on over ten types of challenging semantic transformations and show that TSS significantly outperforms the state of the art. Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset. For instance, our framework achieves 30.4% certified robust accuracy against rotation attacks (within ±30°) on ImageNet. Moreover, to consider a broader range of transformations, we show TSS is also robust against adaptive attacks and unforeseen image corruptions such as CIFAR-10-C and ImageNet-C. (Comment: 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21))
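
    To make the "resolvable transformation" case concrete, here is a minimal sketch of smoothing a classifier over random Gaussian-blur strengths and taking a majority vote. The certification step and the consistency-enhanced training that TSS adds are omitted; `base_classifier`, the blur range and the sample count are assumed placeholders rather than the authors' code.

```python
# Minimal sketch of transformation-specific smoothing for a *resolvable*
# transformation (Gaussian blur): classify many randomly blurred copies of the
# input and take a majority vote. TSS's certification and consistency-enhanced
# training are omitted; `base_classifier`, the blur range and the sample count
# are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_smoothed_predict(base_classifier, image, n=200, sigma_max=2.0,
                          num_classes=10, seed=0):
    """Majority vote of the base classifier over random Gaussian-blur strengths."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        s = rng.uniform(0.0, sigma_max)          # random blur strength
        counts[base_classifier(gaussian_filter(image, sigma=s))] += 1
    return int(np.argmax(counts)), counts
```

    Differentially resolvable transformations such as rotation introduce interpolation error, which is why the abstract describes an additional stratified-sampling step for that category.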

    Toward reliability in the NISQ era: robust interval guarantee for quantum measurements on approximate states

    Near-term quantum computation holds potential across multiple application domains. However, imperfect preparation and evolution of states due to algorithmic and experimental shortcomings, characteristic of near-term implementations, would typically result in measurement outcomes deviating from the ideal setting. It is thus crucial for any near-term application to quantify and bound these output errors. We address this need by deriving robustness intervals which are guaranteed to contain the output in the ideal setting. The first type of interval is based on formulating robustness bounds as semidefinite programs, and uses only the first moment and the fidelity to the ideal state. Furthermore, we consider higher statistical moments of the observable and generalize bounds for pure states, based on the non-negativity of Gram matrices, to mixed states, thus enabling their applicability in the NISQ era where noisy scenarios are prevalent. Finally, we demonstrate our results in the context of the variational quantum eigensolver (VQE) on noisy and noiseless simulations.
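
    A minimal sketch of the semidefinite-programming idea, under the simplifying assumption that the reference state is pure so the fidelity constraint becomes linear in the density matrix: the observable, reference state, fidelity value and use of cvxpy below are illustrative, not the paper's implementation. The resulting interval contains Tr(Aρ) for every state ρ whose fidelity to the reference state is at least F.

```python
# Hedged sketch: bracket the expectation value of an observable A over all
# density matrices whose fidelity to a *pure* reference state |psi> is at
# least F, by solving two small semidefinite programs. A, psi, F and the
# cvxpy formulation are illustrative placeholders.
import numpy as np
import cvxpy as cp

d = 2
A = np.array([[1.0, 0.0], [0.0, -1.0]])          # observable (Pauli-Z)
psi = np.array([[1.0], [0.0]], dtype=complex)    # reference pure state |0>
F = 0.9                                          # assumed fidelity lower bound

sigma = cp.Variable((d, d), hermitian=True)      # candidate density matrix
constraints = [
    sigma >> 0,                                  # positive semidefinite
    cp.trace(sigma) == 1,                        # unit trace
    cp.real(psi.conj().T @ sigma @ psi) >= F,    # fidelity to |psi> at least F
]
lo = cp.Problem(cp.Minimize(cp.real(cp.trace(A @ sigma))), constraints).solve()
hi = cp.Problem(cp.Maximize(cp.real(cp.trace(A @ sigma))), constraints).solve()
print(f"robust interval for <A>: [{lo:.3f}, {hi:.3f}]")   # here ~[0.800, 1.000]
```

    The abstract's bounds additionally exploit measured moments of the observable on the prepared state; this sketch only shows how a fidelity constraint alone already pins the expectation value to an interval via an SDP.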

    Problems and Paradoxes in Economic and Social Policies of Modern Welfare States

    Relationships between economic growth rates and the expansion of welfare expenditures in Western nations are examined. The point is made that real gross national product grew rapidly from about 1959 until about 1973, but that since 1973 it has either grown slowly or not at all, while welfare expenditures and entitlements have continued to escalate. Forecasts of a variety of important economic variables in these countries for the near term are presented and discussed, and it is concluded that despite the current modest economic improvement, difficulties in funding welfare states will continue throughout the remainder of the 1980s. Some consideration is given to problems in welfare states to the end of the century, and further difficulties in funding and managing these states are forecast for this period as well. Problems of welfare states are not regarded as short-term by-products of maladjustments experienced in the Western world in the last 10 years but rather as long-term characteristics. (Peer reviewed; http://deepblue.lib.umich.edu/bitstream/2027.42/67103/2/10.1177_000271628547900102.pd)